IR Drop


DALI-PD: Diffusion-based Synthetic Layout Heatmap Generation for ML in Physical Design

Wu, Bing-Yue, Chhabria, Vidya A.

arXiv.org Artificial Intelligence

Machine learning (ML) has demonstrated significant promise in various physical design (PD) tasks. However, model generalizability remains limited by the availability of high-quality, large-scale training datasets. Creating such datasets is often computationally expensive and constrained by IP. The few public datasets that exist are typically static, slow to generate, and require frequent updates. To address these limitations, we present DALI-PD, a scalable framework for generating synthetic layout heatmaps to accelerate ML in PD research. DALI-PD uses a diffusion model to generate diverse layout heatmaps via fast inference in seconds. The heatmaps include power, IR drop, congestion, macro placement, and cell density maps. Using DALI-PD, we created a dataset comprising over 20,000 layout configurations with varying macro counts and placements. These heatmaps closely resemble real layouts and improve accuracy on downstream ML tasks such as IR drop and congestion prediction.
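The abstract above describes generating heatmaps with a diffusion model. As a minimal sketch of the kind of reverse-diffusion (DDPM-style) sampling loop such a generator might run, the snippet below produces a 2-D array from pure noise; the denoiser is a placeholder stand-in for a trained network, and the heatmap size, step count, and noise schedule are illustrative assumptions, not DALI-PD's actual model.

```python
import numpy as np

def placeholder_denoiser(x, t):
    """Stand-in for a trained noise-prediction network eps_theta(x, t).

    Hypothetical: a real model would be a neural network trained to
    predict the noise added at step t."""
    return 0.1 * x

def sample_heatmap(shape=(32, 32), steps=50, seed=0):
    """Ancestral DDPM sampling: start from Gaussian noise, denoise stepwise."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)      # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(shape)              # start from pure noise
    for t in reversed(range(steps)):
        eps = placeholder_denoiser(x, t)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else np.zeros(shape)
        x = mean + np.sqrt(betas[t]) * noise    # one reverse-diffusion step
    return x

heatmap = sample_heatmap()
```

With a trained denoiser, the same loop would emit power, IR drop, congestion, macro, and cell-density channels instead of structureless noise.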


CFIRSTNET: Comprehensive Features for Static IR Drop Estimation with Neural Network

Liu, Yu-Tung, Cheng, Yu-Hao, Wu, Shao-Yu, Chen, Hung-Ming

arXiv.org Artificial Intelligence

IR drop estimation is now considered a first-order metric due to concerns about reliability and performance in modern electronic products. Since the traditional solution involves a lengthy iteration and simulation flow, fast yet accurate estimation has become an essential demand. In this work, with the help of modern AI acceleration techniques, we propose a comprehensive solution that combines the advantages of image-based and netlist-based features in a neural network framework and obtains high-quality IR drop predictions very efficiently for modern designs. A customized convolutional neural network (CNN) is developed to extract PDN features and make static IR drop estimations. Trained and evaluated on the open-source dataset, our model achieves the best quality in the benchmark for the IR drop estimation problem of the ICCAD CAD Contest 2023, demonstrating the effectiveness of this approach to an important design topic.
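In the spirit of the combined-feature idea above, a convolution can fuse image-based maps (e.g., a current-density map) and netlist-derived maps (e.g., an effective PDN resistance map) as separate input channels. The sketch below is a hedged illustration with a hand-rolled NumPy convolution; the channel names, kernel size, and array sizes are assumptions, not CFIRSTNET's actual architecture.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution of a (C, H, W) input with one (C, kH, kW) kernel."""
    c, h, w = x.shape
    _, kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sum over all channels and the kernel window
            out[i, j] = np.sum(x[:, i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
current_map = rng.random((8, 8))      # image-based feature channel (assumed)
resistance_map = rng.random((8, 8))   # netlist-based feature channel (assumed)
features = np.stack([current_map, resistance_map])   # shape (2, 8, 8)
kernel = rng.standard_normal((2, 3, 3)) * 0.1
ir_drop_pred = np.maximum(conv2d(features, kernel), 0.0)  # ReLU activation
```

A real model stacks many such filters and layers and is trained against golden IR drop maps; the point here is only how heterogeneous features enter as channels of one tensor.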


Estimating Voltage Drop: Models, Features and Data Representation Towards a Neural Surrogate

Jin, Yifei, Koutlis, Dimitrios, Bandala, Hector, Daoutis, Marios

arXiv.org Artificial Intelligence

Accurate estimation of voltage drop (IR drop) in modern Application-Specific Integrated Circuits (ASICs) is highly time- and resource-demanding, due to the growing complexity and transistor density of recent technology nodes. To mitigate this challenge, we investigate how Machine Learning (ML) techniques, including Extreme Gradient Boosting (XGBoost), Convolutional Neural Networks (CNNs), and Graph Neural Networks (GNNs), can reduce the computational effort, and implicitly the time, required to estimate IR drop in Integrated Circuits (ICs). ML algorithms are explored as an alternative solution that offers quick and precise IR drop estimation in considerably less time. This study illustrates the effectiveness of ML algorithms in precisely estimating IR drop and optimizing ASIC sign-off. Prediction of IR drop is an important problem often faced by ASIC designers. As current (I) flows through the Power Distribution Network (PDN), part of the applied voltage inherently drops across the current path, which is, in simple terms, the definition of IR drop: on the supply side this appears as voltage drop, while on the ground (GND) return path it appears as ground bounce. With the transition to larger-density integration of transistors, the number of connection layers and interconnections has increased exponentially over the last decades, and commercial tools are struggling to keep up with this up-scaling demand. After each design fix, a new round of simulations is required for verification; this routine, standard in every ASIC design and manufacturing process, is defined as "sign-off".
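The V = I·R relationship described above can be made concrete with a tiny worked example: a 1-D power rail fed from one supply pad, where each segment's drop is the downstream current times the segment resistance. The resistance and load values below are illustrative, and the series-rail topology is a deliberate simplification of a real 2-D PDN mesh.

```python
def rail_ir_drop(r_seg, loads, vdd=1.0):
    """Node voltages along a series power rail fed from one pad at vdd.

    The current through segment k is the sum of all downstream load
    currents, and each segment drops that current times r_seg (Ohm's law).
    """
    voltages = []
    remaining = sum(loads)      # current entering the first segment
    node_v = vdd
    for i_load in loads:
        node_v -= remaining * r_seg   # IR drop across this segment
        voltages.append(node_v)
        remaining -= i_load           # this node's load leaves the rail
    return voltages

# 0.1-ohm segments, three nodes each drawing 0.2 A from a 1.0 V pad
v = rail_ir_drop(r_seg=0.1, loads=[0.2, 0.2, 0.2])
# v[0] = 1.0 - 0.6*0.1 = 0.94; v[2] = 0.88 at the far end of the rail
```

The farthest node sees the worst-case drop, which is why ML surrogates for full-grid simulation target exactly these spatial voltage maps.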


Comparative Evaluation of Memory Technologies for Synaptic Crossbar Arrays- Part 2: Design Knobs and DNN Accuracy Trends

Victor, Jeffry, Wang, Chunguang, Gupta, Sumeet K.

arXiv.org Artificial Intelligence

Crossbar memory arrays have been touted as the workhorse of in-memory computing (IMC)-based acceleration of Deep Neural Networks (DNNs), but the associated hardware non-idealities limit their efficacy. To address this, cross-layer design solutions that reduce the impact of hardware non-idealities on DNN accuracy are needed. In Part 1 of this paper, we established the co-optimization strategies for various memory technologies and their crossbar arrays, and conducted a comparative technology evaluation in the context of IMC robustness. In this part, we analyze various design knobs such as array size and bit-slice (number of bits per device) and their impact on the performance of 8T SRAM, ferroelectric transistor (FeFET), Resistive RAM (ReRAM), and spin-orbit-torque magnetic RAM (SOT-MRAM) in the context of inference accuracy at the 7nm technology node. Further, we study the effect of circuit design solutions such as Partial Wordline Activation (PWA) and custom ADC reference levels that reduce the hardware non-idealities, and comparatively analyze the response of each technology to such accuracy-enhancing techniques. Our results on ResNet-20 (with CIFAR-10) show that PWA increases accuracy by up to 32.56%, while custom ADC reference levels yield up to 31.62% accuracy enhancement. We observe that compared to the other technologies, FeFET, by virtue of its small layout height and high distinguishability of its memory states, is best suited for large arrays. For higher bit-slices and a more complex dataset (ResNet-50 with CIFAR-100), we found that ReRAM matches the performance of FeFET.
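Partial Wordline Activation, mentioned above, can be sketched in a few lines: instead of asserting every wordline of the crossbar at once, rows are activated in small groups and the partial column sums are accumulated. In an ideal (non-ideality-free) array the result equals the full matrix-vector product, as the toy below checks; in hardware, the smaller per-group column currents are what relax the ADC range and reduce non-ideality impact. The group size and array dimensions are illustrative.

```python
import numpy as np

def crossbar_mvm_pwa(weights, inputs, group=4):
    """MVM of a (rows, cols) conductance matrix with an input vector,
    computed group-of-wordlines at a time (Partial Wordline Activation)."""
    acc = np.zeros(weights.shape[1])
    for start in range(0, weights.shape[0], group):
        active = slice(start, start + group)
        acc += inputs[active] @ weights[active]   # partial column currents
    return acc

rng = np.random.default_rng(1)
W = rng.integers(0, 2, size=(16, 8)).astype(float)   # binary conductances
x = rng.integers(0, 2, size=16).astype(float)        # binary wordline inputs
out_pwa = crossbar_mvm_pwa(W, x, group=4)
out_full = x @ W   # ideal full-array readout for comparison
```

In a real array each group's partial sum passes through the ADC before accumulation, so the quantization and device non-idealities act on smaller values, which is the source of the accuracy gains reported above.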


IR-Aware ECO Timing Optimization Using Reinforcement Learning

Chhabria, Vidya A., Jiang, Wenjing, Sapatnekar, Sachin S.

arXiv.org Artificial Intelligence

Engineering change orders (ECOs) in late design stages make minimal fixes to recover from timing shifts due to excessive IR drops. This paper integrates IR-drop-aware timing analysis and ECO timing optimization using reinforcement learning (RL). The method operates after physical design and power grid synthesis, and rectifies IR-drop-induced timing degradation through gate sizing. It incorporates the Lagrangian relaxation (LR) technique into a novel RL framework, which trains a relational graph convolutional network (R-GCN) agent to sequentially size gates to fix timing violations. The R-GCN agent outperforms a classical LR-only algorithm: in an open 45nm technology, it (a) moves the Pareto front of the delay-area tradeoff curve to the left and (b) saves runtime over the classical method by running fast inference using trained models at iso-quality. The RL model is transferable across timing specifications, and transferable to unseen designs with zero-shot learning or fine-tuning.
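The Lagrangian-relaxation idea behind the method above can be shown with a toy: gate area grows with size while delay shrinks, and each gate picks the size minimizing area + λ·delay. The area model (cost s), delay model (1/s), fixed multiplier λ, and independent per-gate choice are all illustrative simplifications; the paper's R-GCN agent instead learns to size gates sequentially on a timing graph.

```python
def size_path(n_gates, sizes, lam):
    """For each gate, pick the size s minimizing the Lagrangian cost
    area(s) + lam * delay(s), with toy models area(s)=s, delay(s)=1/s."""
    chosen = []
    for _ in range(n_gates):
        costs = {s: s + lam / s for s in sizes}  # discrete candidate sizes
        chosen.append(min(costs, key=costs.get))
    return chosen

# lam = 4: cost is 5.0 at size 1, 4.0 at size 2, 5.0 at size 4, 8.5 at size 8
chosen = size_path(n_gates=3, sizes=[1, 2, 4, 8], lam=4.0)
```

Raising λ (the timing-criticality multiplier) pushes gates toward larger, faster sizes; in a full LR flow the multipliers themselves are updated from slack violations on each path.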


Researchers at Peking University Open-Source 'CircuitNet,' a Dataset for Machine Learning Applications in Electronic Design Automation (EDA)

#artificialintelligence

Electronic design automation (EDA), often known as computer-aided design (CAD), is a class of software tools used to create electronic systems like integrated circuits (ICs). EDA tools enable designers to create designs for very-large-scale integration (VLSI) chips with billions of transistors. Due to the size and complexity of current electronic systems, EDA tools are crucial for VLSI design. Thanks to the explosion of artificial intelligence (AI) algorithms, the EDA research community has recently been actively investigating AI-for-IC methodologies to design cutting-edge chips. Numerous studies have investigated machine learning-based solutions for cross-stage prediction tasks in the design cycle to promote faster design convergence.